    INDIVIDUAL DIFFERENCES IN BRAIN ACTIVITIES WHEN HUMAN WISHES TO LISTEN TO MUSIC CONTINUOUSLY USING NEAR-INFRARED SPECTROSCOPY

    This paper introduces individual differences in prefrontal cortex activity when a person wishes to listen to music, measured using near-infrared spectroscopy. The individual differences are confirmed by visualizing variations in the oxygenated hemoglobin level, with the sensing positions placed around the prefrontal cortex. The existence of individual differences was verified experimentally: the positions that become active when a subject feels the wish to listen to music differ between subjects, and each subject's oxygenated hemoglobin level differs from its value when the subject feels no such wish. The results show that the wish to listen to music can be detected from changes in the oxygenated hemoglobin level. They also suggest that the active positions differ between subjects because sensitivity to, and perception of, the stimulus differ, and that these individual differences can therefore be expressed as differences in active positions.
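    The abstract reports that a wish to listen to music can be detected from changes in the oxygenated hemoglobin (oxy-Hb) level, but gives no algorithm. The following is a minimal sketch of one way such a detector could look, assuming segmented per-channel oxy-Hb time series; the function name, array shapes, and z-score threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def active_channels(oxy_wish, oxy_rest, z_thresh=2.0):
    """Flag NIRS channels whose mean oxy-Hb level during the 'wish'
    period deviates from the resting baseline by more than z_thresh
    times the baseline's standard deviation.
    Both arrays have shape (n_channels, n_samples)."""
    mu = oxy_rest.mean(axis=1)            # per-channel baseline mean
    sigma = oxy_rest.std(axis=1) + 1e-9   # per-channel baseline spread
    z = (oxy_wish.mean(axis=1) - mu) / sigma
    return np.flatnonzero(np.abs(z) > z_thresh), z

# Synthetic example: 16 prefrontal channels, 10 Hz sampling, 30 s segments.
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.05, (16, 300))
wish = rest + rng.normal(0.0, 0.05, (16, 300))
wish[3] += 0.3                            # simulated active channel
channels, scores = active_channels(wish, rest)
print("active channels:", channels)       # expected: [3]
```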

    EEG Analysis Method to Detect Unspoken Answers to Questions Using MSNNs

    Brain–computer interfaces (BCIs) facilitate communication between the human brain and computational systems, and additionally offer mechanisms for environmental control to enhance human life. The present study focused on the application of BCIs to communication support, especially the detection of unspoken answers to questions. Utilizing a multistage neural network (MSNN) with convolutional and pooling layers, the proposed method comprises a threefold approach: electroencephalogram (EEG) measurement, EEG feature extraction, and answer classification. The EEG signals of the participants were captured as they mentally responded with “yes” or “no” to the posed questions. Feature extraction was achieved through an MSNN composed of three distinct convolutional neural network models: the first model discriminates between EEG signals with and without discernible noise artifacts, whereas the subsequent two models extract features from EEG signals with or without such artifacts, respectively. A support vector machine is then employed to classify the answers to the questions. The proposed method was validated in experiments using authentic EEG data. The sensitivity and precision of the proposed method both had a mean of 99.6% and a standard deviation of 0.2%. These findings demonstrate that high accuracy can be attained in a BCI by first segregating the EEG signals according to the presence or absence of artifact noise, and they underscore the stability of such classification. The proposed method thus offers the prospective advantage of separating noise-corrupted EEG signals for enhanced BCI performance.
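    The abstract fixes the pipeline (artifact detector, two feature extractors, SVM classifier) but not the architectures. Below is a minimal sketch of the routing idea in PyTorch with a scikit-learn SVM; the layer sizes, channel counts, feature dimension, and all identifiers are assumptions, and the models are untrained placeholders rather than the authors' MSNN.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallCNN(nn.Module):
    """Generic 1-D conv/pool stack standing in for one MSNN stage."""
    def __init__(self, n_ch, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_ch, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_out),
        )

    def forward(self, x):
        return self.net(x)

detector = SmallCNN(n_ch=8, n_out=2)     # stage 1: noisy vs. clean EEG
feat_clean = SmallCNN(n_ch=8, n_out=64)  # stage 2: features for clean EEG
feat_noisy = SmallCNN(n_ch=8, n_out=64)  # stage 3: features for noisy EEG

def extract_features(eeg):
    """eeg: (batch, channels, samples). Route each trial to the feature
    extractor matching the detector's noisy/clean decision. Both
    extractors run here for brevity; a real system would dispatch."""
    with torch.no_grad():
        noisy = detector(eeg).argmax(dim=1).bool()
        feats = torch.where(noisy.unsqueeze(1),
                            feat_noisy(eeg), feat_clean(eeg))
    return feats.numpy()

# Final stage: an SVM classifies "yes"/"no" from the extracted features.
X_train = extract_features(torch.randn(40, 8, 512))  # placeholder EEG
y_train = [0, 1] * 20                                # placeholder labels
svm = SVC(kernel="rbf").fit(X_train, y_train)
```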

    Japanese sign language classification based on gathered images and neural networks

    This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with neural networks comprising convolutional and pooling layers (CNNs). Gathered-image generation produces images based on mean images: the maximum difference value between blocks of the mean image and the JSL motion images is calculated, and the gathered images are assembled from the blocks having the largest difference values. CNNs extract features from the gathered images, and a support vector machine for multi-class classification together with a multilayer perceptron is employed to classify 20 JSL words. In experiments, the proposed method achieved a mean recognition accuracy of 94.1%. These results suggest that the proposed method obtains enough information to classify the sample words.
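    The abstract's description of gathered-image generation is terse, so the following Python sketch encodes one plausible reading: the blocks of a per-word mean image that differ most from a reference mean image are tiled into a single "gathered" image. The block size, the number of retained blocks, and the reference image are hypothetical choices.

```python
import numpy as np

def gathered_image(frames, reference, block=8, keep=16):
    """Tile the `keep` blocks of the per-word mean image that differ
    most from `reference` into one gathered image.
    frames: (n_frames, H, W); reference: (H, W); keep must be square."""
    mean_img = frames.mean(axis=0)
    H, W = reference.shape
    scored = []
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            patch = mean_img[y:y + block, x:x + block]
            diff = np.abs(patch - reference[y:y + block, x:x + block]).max()
            scored.append((diff, patch))
    scored.sort(key=lambda t: t[0], reverse=True)     # largest diffs first
    side = int(np.sqrt(keep))
    top = [p for _, p in scored[:keep]]
    rows = [np.hstack(top[i:i + side]) for i in range(0, keep, side)]
    return np.vstack(rows)

motion = np.random.rand(30, 64, 64)   # placeholder JSL motion frames
global_mean = np.random.rand(64, 64)  # placeholder reference mean image
print(gathered_image(motion, global_mean).shape)  # (32, 32): 4x4 grid of 8x8
```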

    Development of Eye Mouse Using EOG signals and Learning Vector Quantization Method

    Recognition of eye motions has attracted increasing attention from researchers around the world in recent years. Compared with other body movements, eye motion is responsive and requires little physical strength. In particular, for patients with severe physical disabilities, eye motion is often the last spontaneous movement with which they can respond. To provide an efficient means of communication for patients with conditions such as ALS (amyotrophic lateral sclerosis), who cannot move any muscles except those of the eyes, this paper proposes a system that uses EOG signals and the Learning Vector Quantization algorithm to recognize eye motions. Based on the recognition results, an API (application programming interface) is used to control cursor movements. This system could serve as a means of communication to help ALS patients.
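    The abstract names EOG signals and Learning Vector Quantization but gives no implementation details. As a sketch of the classification and cursor-control steps, here is a minimal LVQ1 trained on invented two-dimensional EOG features for four eye motions; the feature layout, labels, step sizes, and the cursor API call are all assumptions.

```python
import numpy as np

class LVQ1:
    """Minimal LVQ1: one prototype per class; the winning prototype is
    nudged toward correctly classified samples and away from errors."""
    def fit(self, X, y, lr=0.1, epochs=30):
        self.labels = np.unique(y)
        self.proto = np.array([X[y == c].mean(axis=0) for c in self.labels])
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                k = np.argmin(np.linalg.norm(self.proto - xi, axis=1))
                sign = 1.0 if self.labels[k] == yi else -1.0
                self.proto[k] += sign * lr * (xi - self.proto[k])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.proto[None], axis=2)
        return self.labels[d.argmin(axis=1)]

# Invented 2-D EOG features (e.g. horizontal/vertical amplitudes).
rng = np.random.default_rng(1)
means = ([0, 1], [0, -1], [-1, 0], [1, 0])
X = np.vstack([rng.normal(m, 0.3, (25, 2)) for m in means])
y = np.repeat(["up", "down", "left", "right"], 25)
model = LVQ1().fit(X, y)

MOVES = {"up": (0, -10), "down": (0, 10), "left": (-10, 0), "right": (10, 0)}
dx, dy = MOVES[model.predict(rng.normal([0, 1], 0.3, (1, 2)))[0]]
# A real system would call an OS cursor API here, e.g. pyautogui.moveRel.
print("cursor delta:", dx, dy)
```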

    Lost Property Detection by Template Matching using Genetic Algorithm and Random Search

    In this paper, we propose an object search method for lost property detection that adapts to transformations of the object being searched for. The search is divided into two stages: a global search and a local search. Template matching with a Genetic Algorithm (GA) is used in the global search, and a random search is used in the local search. Experimental results show that the system can detect the approximate position of the target object: the search accuracy of the proposed method is 83.6%, against 42.1% for a comparative experiment using the GA alone, verifying that the proposed method is effective for lost property detection. In future work, the search accuracy needs to be increased so that objects can be found more stably, particularly by improving the local search.
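    The abstract does not state the matching score or the GA's operators, so the following Python sketch makes assumptions throughout: a grayscale image, a rigid template, sum-of-absolute-differences scoring, a crude coordinate-mixing crossover, and fixed mutation and refinement radii. It only illustrates the global-GA-then-local-random-search structure.

```python
import numpy as np

def score(img, tmpl, x, y):
    """Negative sum of absolute differences of the template at (x, y)."""
    h, w = tmpl.shape
    return -np.abs(img[y:y + h, x:x + w] - tmpl).sum()

def ga_then_random(img, tmpl, pop=30, gens=40, local_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    H, W = img.shape
    h, w = tmpl.shape
    # Global stage: GA over candidate top-left positions (x, y).
    xs = rng.integers(0, W - w, pop)
    ys = rng.integers(0, H - h, pop)
    for _ in range(gens):
        fit = np.array([score(img, tmpl, x, y) for x, y in zip(xs, ys)])
        order = fit.argsort()[::-1]
        xs, ys = xs[order][:pop // 2], ys[order][:pop // 2]  # selection
        cx, cy = rng.permutation(xs), rng.permutation(ys)    # mix coordinates
        mx = cx + rng.integers(-5, 6, cx.size)               # mutation
        my = cy + rng.integers(-5, 6, cy.size)
        xs = np.clip(np.concatenate([xs, mx]), 0, W - w)
        ys = np.clip(np.concatenate([ys, my]), 0, H - h)
    best = int(np.argmax([score(img, tmpl, x, y) for x, y in zip(xs, ys)]))
    bx, by = int(xs[best]), int(ys[best])
    # Local stage: random search in a small window around the GA's best.
    for _ in range(local_iters):
        nx = int(np.clip(bx + rng.integers(-3, 4), 0, W - w))
        ny = int(np.clip(by + rng.integers(-3, 4), 0, H - h))
        if score(img, tmpl, nx, ny) > score(img, tmpl, bx, by):
            bx, by = nx, ny
    return bx, by

# Synthetic check: the template is cut from the image itself, so the
# search should land at or near (x=80, y=50).
img = np.random.default_rng(1).random((200, 200))
tmpl = img[50:70, 80:100].copy()
print(ga_then_random(img, tmpl))
```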